Session C-1

Human Sensing

May 3 Tue, 2:00 PM — 3:30 PM EDT

Amaging: Acoustic Hand Imaging for Self-adaptive Gesture Recognition

Penghao Wang, Ruobing Jiang and Chao Liu (Ocean University of China, China)

A practical challenge common to state-of-the-art acoustic gesture recognition techniques is to respond adaptively to intended gestures rather than unintended motions during real-time tracking of the human motion flow. In addition, an under-expanded sensing space and vulnerability to mobile interference jointly impair the pervasiveness of acoustic sensing. Instead of struggling along this bottlenecked routine, we open up an independent sensing dimension: acoustic 2-D hand-shape imaging. We first demonstrate the feasibility of acoustic imaging through multiple viewpoints dynamically generated by hand movement. We then propose Amaging, hand-shape-imaging-triggered gesture recognition, to offer adaptive gesture responses. Digital Dechirp is performed to largely reduce the computational cost of demodulation and pulse compression. Mobile interference is filtered by Moving Target Indication. Multi-frame macro-scale imaging with Joint Time-Frequency Analysis eliminates image blur while maintaining adequate resolution. Amaging features a multiplicative expansion of sensing capability and dual-dimensional parallelism for both hand-shape and gesture-trajectory recognition. Extensive experiments and simulations demonstrate Amaging's distinguished hand-shape imaging performance, independent of diverse hand movements and immune to mobile interference. A 96% hand-shape recognition rate is achieved with ResNet18 and a 60× data augmentation rate.
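The abstract above names Moving Target Indication (MTI) as its interference filter. The paper's pipeline is not reproduced here; the following is only a minimal generic sketch of the MTI idea (a two-pulse canceller): static clutter produces identical echoes frame after frame, so subtracting consecutive frames cancels it while a moving reflector, whose phase drifts between frames, survives. All names and signal parameters below are illustrative assumptions.

```python
import numpy as np

def mti_two_pulse_canceller(frames: np.ndarray) -> np.ndarray:
    """Suppress static clutter by subtracting consecutive echo frames.

    frames: (n_frames, n_range_bins) complex echo profiles.
    Returns (n_frames - 1, n_range_bins) clutter-suppressed profiles.
    """
    return np.diff(frames, axis=0)

# Toy scene: constant static clutter everywhere, plus one moving target
# at range bin 10 whose phase advances 0.5 rad per frame.
n_frames, n_bins = 8, 64
frames = np.ones((n_frames, n_bins), dtype=complex)      # static clutter
frames[:, 10] += np.exp(1j * 0.5 * np.arange(n_frames))  # moving target
out = mti_two_pulse_canceller(frames)
# Static bins cancel to ~0; the moving target's bin retains energy.
```

In a real acoustic pipeline the same subtraction would run on dechirped, pulse-compressed profiles rather than on raw samples.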

mmECG: Monitoring Human Cardiac Cycle in Driving Environments Leveraging Millimeter Wave

Xiangyu Xu (Southeast University, China); Jiadi Yu (Shanghai Jiao Tong University, China); Chenguang Ma (Ant Financial Services Group, China); Yanzhi Ren and Hongbo Liu (University of Electronic Science and Technology of China, China); Yanmin Zhu, Yi-Chao Chen and Feilong Tang (Shanghai Jiao Tong University, China)

The continuously increasing time spent on car trips in recent years brings growing attention to the physical and mental health of drivers on the road. As one of the key vital signs, the heartbeat is a critical indicator of a driver's health state. In this paper, we propose a contactless cardiac cycle monitoring system, mmECG, which leverages Commercial-Off-The-Shelf (COTS) mmWave radar to estimate the fine-grained heart movements of drivers in moving vehicles. To extract the minute heart movements of drivers and eliminate other influences on phase changes, we construct a movement mixture model to represent the phase changes caused by different movements, and further design a hierarchical variational mode decomposition (VMD) approach to extract and estimate the essential heart movement in mmWave signals. Finally, based on the extracted phase changes, mmECG reconstructs the cardiac cycle by estimating fine-grained movements of the atria and ventricles with a template-based optimization method. Experimental results involving 25 drivers in real driving scenarios demonstrate that mmECG can accurately estimate not only heart rates but also cardiac cycles in real driving environments.
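The hierarchical VMD step in the abstract needs a dedicated decomposition library, but the step it operates on — recovering chest-surface displacement from the radar phase — is standard and can be sketched generically. For a round-trip path, a displacement of d metres shifts the phase by 4πd/λ. The carrier frequency and signal below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def displacement_from_iq(iq: np.ndarray, wavelength_m: float) -> np.ndarray:
    """Convert the complex IQ samples at a fixed range bin into radial
    displacement: delta_d = wavelength * delta_phi / (4 * pi), since the
    radar signal travels the target distance twice (round trip)."""
    phase = np.unwrap(np.angle(iq))
    return wavelength_m * phase / (4 * np.pi)

# Simulated heartbeat-like vibration (0.25 mm amplitude, ~72 bpm) observed
# at an assumed 77 GHz carrier (wavelength ~3.9 mm).
wavelength = 3e8 / 77e9
t = np.linspace(0.0, 1.0, 1000)
d_true = 0.25e-3 * np.sin(2 * np.pi * 1.2 * t)
iq = np.exp(1j * 4 * np.pi * d_true / wavelength)
d_est = displacement_from_iq(iq, wavelength)   # recovers d_true
```

In practice the recovered phase mixes heart motion with respiration and vehicle vibration, which is exactly why the paper decomposes it further before template matching.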

Mudra: A Multi-Modal Smartwatch Interactive System with Hand Gesture Recognition and User Identification

Kaiwen Guo, Hao Zhou, Ye Tian and Wangqiu Zhou (University of Science and Technology of China, China); Yusheng Ji (National Institute of Informatics, Japan); Xiang-Yang Li (University of Science and Technology of China, China)

The great popularity of smartwatches leads to a growing demand for smarter interactive systems. Hand gestures are well suited for interaction due to their unique features. However, existing single-modal gesture interactive systems have different biases in diverse scenarios, which makes them difficult to apply in real life. In this paper, we propose a multi-modal smartwatch interactive system named Mudra, which fuses vision and Inertial Measurement Unit (IMU) signals to recognize hand gestures and identify users for convenient and robust interaction. We carefully design a parallel attention multi-task model for the different modalities, and fuse classification results at the decision level with an adaptive weight adjustment algorithm. We implement a prototype of Mudra and collect data from 25 volunteers to evaluate its effectiveness. Extensive experiments demonstrate that Mudra achieves 95.4% and 92.3% F1-scores on the recognition and identification tasks, respectively. Meanwhile, Mudra maintains stability and robustness under different experimental settings, and 87% of users consider it a convenient and reliable way to interact with a smartwatch.
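Decision-level fusion with adaptive weights, as named in the abstract, can be sketched in its simplest generic form: each modality outputs class probabilities, and a scenario-dependent weight shifts trust between them. The weighting scheme and numbers below are illustrative assumptions, not Mudra's actual algorithm.

```python
import numpy as np

def fuse_decisions(p_vision, p_imu, w_vision: float) -> int:
    """Decision-level fusion: convex combination of per-modality class
    probabilities. w_vision in [0, 1] is the adaptive weight, e.g. lowered
    when the camera view is dark or occluded."""
    p = w_vision * np.asarray(p_vision) + (1.0 - w_vision) * np.asarray(p_imu)
    return int(np.argmax(p))

# Vision is unsure between classes 0 and 1; the IMU strongly favors class 1.
# With vision down-weighted (w_vision = 0.4), the fused decision is class 1.
pred = fuse_decisions([0.5, 0.5, 0.0], [0.1, 0.8, 0.1], w_vision=0.4)
```

Fusing at the decision level (rather than the feature level) keeps the two branches independent, so one failing sensor degrades, rather than corrupts, the final output.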

Sound of Motion: Real-time Wrist Tracking with A Smart Watch-Phone Pair

Tianyue Zheng and Cai Chao (Nanyang Technological University, Singapore); Zhe Chen (School of Computer Science and Engineering, Nanyang Technological University, Singapore); Jun Luo (Nanyang Technological University, Singapore)

Proliferation of smart environments entails the need for real-time and ubiquitous human-machine interaction through, most likely, hand/arm motions. Though a few recent efforts attempt to track hand/arm motions in real time with COTS devices, they either achieve rather low accuracy or rely on a carefully designed infrastructure and heavy signal processing. To this end, we propose SoM (Sound of Motion), a lightweight system for wrist tracking. Requiring only a smart watch-phone pair, SoM entails very light computations that can run on resource-constrained smartwatches. SoM uses the embedded IMU sensors to perform basic motion tracking on the smartwatch, and relies on the fixed smartphone to act as an "acoustic anchor": regular beacons sent by the phone are received in an irregular manner due to the watch's motion, and these variations provide useful hints to correct the drift of IMU tracking. Using extensive experiments on our SoM prototype, we demonstrate that this delicately engineered system achieves satisfactory wrist tracking accuracy and strikes a good balance between complexity and performance.
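The "acoustic anchor" idea above can be sketched generically: if the phone emits beacons every T seconds, a receiver moving away from it hears each beacon slightly later than scheduled, and the accumulated timing deviation maps to radial displacement via the speed of sound. This is only an illustration of the principle, with assumed numbers, not SoM's actual drift-correction algorithm.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in room-temperature air

def radial_displacement(arrival_times, beacon_period: float) -> np.ndarray:
    """Convert irregular beacon inter-arrival times into the receiver's
    radial movement relative to the fixed speaker: each extra delay
    (dt - T) corresponds to (dt - T) * c metres of outward motion."""
    dt = np.diff(np.asarray(arrival_times))
    return np.cumsum((dt - beacon_period) * SPEED_OF_SOUND)

# Watch drifting away from the phone: each 50 ms beacon arrives ~100 us
# late relative to the schedule, i.e. ~3.43 cm of motion per interval.
arrivals = [0.0, 0.0501, 0.1002, 0.1503]
disp = radial_displacement(arrivals, beacon_period=0.050)
```

A single anchor only constrains the radial direction; that is why such hints are used to correct an IMU track rather than replace it.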

Session Chair

Jun Luo (Nanyang Technological University)

Session C-2

IoT

May 3 Tue, 4:00 PM — 5:30 PM EDT

DBAC: Directory-Based Access Control for Geographically Distributed IoT Systems

Luoyao Hao, Vibhas V Naik and Henning Schulzrinne (Columbia University, USA)

We propose and implement Directory-Based Access Control (DBAC), a flexible and systematic access control approach for geographically distributed, multi-administration IoT systems. DBAC relies on a dedicated module, the IoT directory, to store device metadata, manage federated identities, and assist with cross-domain authorization. The directory service decouples IoT access into two phases: discovering device information from directories and operating devices through the discovered interfaces. DBAC extends attribute-based authorization and retrieves diverse attributes of users, devices, and environments from multi-faceted sources via standard methods, while protecting user privacy. To support resource-constrained devices, DBAC assigns a capability token to each authorized user, so devices only need to validate tokens to process a request.
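The capability-token pattern in the abstract — an authorization server mints a token, and a constrained device validates it locally without contacting the server — can be sketched with a generic MAC-based token. The token layout, field names, and key handling below are illustrative assumptions, not DBAC's actual format.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-device-key"  # hypothetical key provisioned to the device

def issue_token(user: str, device: str, ttl_s: int = 300) -> str:
    """Directory side: mint a token binding a user, a device, and an expiry."""
    claims = json.dumps({"user": user, "dev": device,
                         "exp": time.time() + ttl_s}, sort_keys=True).encode()
    body = base64.urlsafe_b64encode(claims)
    mac = base64.urlsafe_b64encode(
        hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + mac).decode()

def validate_token(token: str, device: str) -> bool:
    """Device side: a constant-time MAC check plus expiry/scope test —
    no round-trip to the directory is needed."""
    try:
        body, mac = token.encode().split(b".")
    except ValueError:
        return False
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(mac, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["dev"] == device and claims["exp"] > time.time()

token = issue_token("alice", "lamp-1")
```

Symmetric MACs keep validation cheap on constrained hardware; a deployment spanning administrative domains would more likely use public-key signatures so devices need not share secrets with every issuer.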

IoTMosaic: Inferring User Activities from IoT Network Traffic in Smart Homes

Yinxin Wan, Kuai Xu, Feng Wang and Guoliang Xue (Arizona State University, USA)

Recent advances in cyber-physical systems, artificial intelligence, and cloud computing have driven the wide deployment of Internet-of-Things (IoT) devices in smart homes. As IoT devices often directly interact with users and environments, this paper studies if and how we could explore the collective insights from multiple heterogeneous IoT devices to infer user activities for home safety monitoring and assisted living. Specifically, we develop a new system, IoTMosaic, to first profile diverse user activities with distinct IoT device event sequences, which are extracted from smart home network traffic based on their TCP/IP data packet signatures. Given the challenges of missing and out-of-order IoT device events due to device malfunctions or varying network and system latencies, IoTMosaic further develops simple yet effective approximate matching algorithms to identify user activities from real-world IoT network traffic. Our experimental results on thousands of user activities in a smart home environment over two months show that the proposed algorithms can infer different user activities from IoT network traffic with overall accuracy, precision, and recall of 0.99, 0.99, and 1.00, respectively.
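Approximate matching of an observed device-event sequence against profiled activity signatures, tolerating missing or reordered events, can be sketched with edit distance. This is a generic illustration under assumed event names and profiles, not IoTMosaic's actual algorithm.

```python
def edit_distance(a, b) -> int:
    """Levenshtein distance between two event sequences; insertions and
    deletions absorb lost or spurious device events."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,            # delete a[i-1]
                          d[i][j - 1] + 1,            # insert b[j-1]
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def match_activity(observed, profiles, max_dist: int = 1):
    """Return the profiled activity whose signature is closest to the
    observed sequence, or None if nothing is within the tolerance."""
    name, sig = min(profiles.items(),
                    key=lambda kv: edit_distance(observed, kv[1]))
    return name if edit_distance(observed, sig) <= max_dist else None

# Hypothetical activity profiles built from device event signatures.
profiles = {
    "leave_home": ["door_open", "door_close", "lock", "motion_off"],
    "arrive_home": ["unlock", "door_open", "door_close", "light_on"],
}
# One event (motion_off) was lost in the traffic capture:
activity = match_activity(["door_open", "door_close", "lock"], profiles)
```

The tolerance `max_dist` trades recall against precision: higher values recover more activities from lossy traces but risk confusing activities with similar signatures.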

Physical-Level Parallel Inclusive Communication for Heterogeneous IoT Devices

Sihan Yu (Clemson University, USA); Xiaonan Zhang (Florida State University, USA); Pei Huang and Linke Guo (Clemson University, USA)

The lack of spectrum resources puts a hard limit on managing large-scale heterogeneous IoT systems. Although previous works alleviate this strain by coordinating transmission power, time slots, and sub-channels, they may not be feasible in future IoT applications with dense deployments. Taking Wi-Fi and ZigBee coexistence as an example, in this paper we explore a physical-level parallel inclusive communication paradigm, which leverages novel bit-embedding approaches on the OQPSK protocol to enable both Wi-Fi and ZigBee IoT devices to decode the same inclusive signal at the same time, each recovering its own data. By carefully crafting the inclusive signal using the legacy Wi-Fi protocol, the overlapping spectrum can be simultaneously reused by both protocols, achieving the maximum ZigBee data rate (250 kbps) and up to 3.75 Mbps for a Wi-Fi pair over only 2 MHz of bandwidth. The achieved spectrum efficiency outperforms the majority of Cross-Technology Communication schemes. Compared with existing works, our proposed system is the first to achieve an entirely software-level design, which can be readily implemented on Commercial-Off-The-Shelf (COTS) devices without any hardware modification. Based on extensive real-world experiments on both USRP and COTS device platforms, we demonstrate the feasibility, generality, and efficiency of the proposed paradigm.

RF-Protractor: Non-Contacting Angle Tracking via COTS RFID in Industrial IoT Environment

Tingjun Liu, Chuyu Wang, Lei Xie and Jingyi Ning (Nanjing University, China); Tie Qiu (Tianjin University, China); Fu Xiao (Nanjing University of Posts and Telecommunications, China); Sanglu Lu (Nanjing University, China)

As a key component of most machines, the status of the rotation shaft is a crucial issue in factories, affecting both industrial safety and product quality. Tracking the rotation angle can monitor the rotation shaft, but traditional solutions either rely on specialized sensors, which require intrusive modification, or use CV-based solutions, which suffer under poor lighting conditions. In this paper, we present a non-contacting, low-cost solution, RF-Protractor, to track the rotation shaft based on surrounding RFID tags. In particular, instead of directly attaching tags to the shaft, we deploy the tags beside the shaft and leverage the polarization effect of the signal reflected from the shaft for angle tracking. To improve the polarization effect, we place aluminum foil on the shaft turntable, requiring no modification of the shaft itself. We first build a polarization model to quantify the relationship between the rotation angle and the reflected signal. We then propose to combine the signals of multiple tags to cancel the reflection effect and estimate the environment-related parameter to calibrate the model. Finally, we propose to leverage both the power trend and the IQ signal to estimate the rotation angle. Extensive experiments show that RF-Protractor achieves an average error of 3.1° in angle tracking.
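The polarization effect the abstract builds on can be illustrated with a Malus-style model: the power reflected through a rotating linear polarizer varies as cos² of the rotation angle, plus an environment-dependent bias. The model form, parameters, and inversion below are a generic sketch under that assumption, not the paper's calibrated model.

```python
import numpy as np

def reflected_power(theta_deg: float, p0: float, bias: float) -> float:
    """Malus-style polarization model: received power varies as cos^2 of
    the angle between the tag's and the reflector's polarization axes.
    bias stands in for an environment-related offset to be calibrated."""
    return p0 * np.cos(np.radians(theta_deg)) ** 2 + bias

def estimate_angle(power: float, p0: float, bias: float) -> float:
    """Invert the model. The 0-90 degree ambiguity of cos^2 is left
    unresolved here; combining multiple tags and the IQ signal, as the
    paper does, is one way to disambiguate and track continuously."""
    ratio = np.clip((power - bias) / p0, 0.0, 1.0)
    return float(np.degrees(np.arccos(np.sqrt(ratio))))

theta_true = 37.0
p = reflected_power(theta_true, p0=1.0, bias=0.1)
theta_est = estimate_angle(p, p0=1.0, bias=0.1)   # recovers ~37 degrees
```

Because a single power reading is ambiguous and noisy, tracking the power trend over time, rather than inverting isolated samples, is the more robust strategy.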

Session Chair

Tarek Abdelzaher (University of Illinois Urbana-Champaign)
